Load Balancing for Parallel Computing on Distributed Computers

Author

  • Y. P. Chien
Abstract

Distributed processing can be used to solve large, computation-intensive problems. A distributed system may include parallel supercomputers, networked workstations, and PCs. This paper discusses load balancing of a parallel job in a distributed computing environment. The information necessary for load balancing is studied, and the software tools that automatically collect this information and perform load balancing are described. Parallel computational fluid dynamics examples are used to demonstrate the effectiveness of the load balancing method.

Key-Words: distributed computing, dynamic load balancing.
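The paper's own tools are not reproduced here, but the core idea the abstract outlines (collect per-node performance information, then divide the work accordingly) can be sketched roughly as follows. All names (Node, measured_speed, partition_work) and the proportional-split policy are illustrative assumptions, not the paper's actual interface or algorithm.

```python
# A minimal, generic sketch: measure each node's effective speed, then split
# a divisible workload in proportion to it. Names and values are hypothetical.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    measured_speed: float  # e.g., cells/second from a short benchmark run


def partition_work(total_cells: int, nodes: list[Node]) -> dict[str, int]:
    """Assign grid cells to nodes in proportion to their measured speed."""
    total_speed = sum(n.measured_speed for n in nodes)
    shares = {n.name: int(total_cells * n.measured_speed / total_speed) for n in nodes}
    # Hand any rounding remainder to the fastest node.
    remainder = total_cells - sum(shares.values())
    fastest = max(nodes, key=lambda n: n.measured_speed)
    shares[fastest.name] += remainder
    return shares


if __name__ == "__main__":
    cluster = [Node("supercomputer", 400.0), Node("workstation", 120.0), Node("pc", 30.0)]
    print(partition_work(1_000_000, cluster))
```

In a dynamic setting, the measured speeds would be refreshed during the run and the partition recomputed, which is the role the abstract assigns to the automatic information-collection tools.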


Related articles

Molecular Dynamics with Load Balancing on Distributed-Memory MIMD Computers

We report two aspects of a computational molecular dynamics study of large-scale problems on a distributed-memory MIMD parallel computer: (1) efficiency and scalability results on Intel Paragon parallel computers with up to 512 nodes and (2) a new method for dynamic load balancing.

Full text

An introduction to load balancing for parallel raytracing on HDC systems

Heterogeneous distributed computing (HDC) systems, which exploit the aggregate power of a network of workstations and personal computers, are an inexpensive alternative to dedicated parallel supercomputing systems. As these systems are widely available in academic and industrial environments, it is becoming popular to use these computing resources to solve time-consuming applications. The foc...

Full text

Load Balancing Problem for Parallel Computers with Distributed Memory

This paper deals with load balancing of parallel algorithms for distributed-memory computers. The parallel versions of BLAS subroutines for matrix-vector product and LU factorization are considered. Two task partitioning algorithms are investigated and speed-ups are calculated. The cases of homogeneous and heterogeneous collections of computers/processors are studied, and special partitioning al...

Full text
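As an illustration of the speed-proportional partitioning such heterogeneous schemes rely on, the sketch below splits matrix rows across processors according to their relative speeds. The function name and speed values are hypothetical and not taken from the paper.

```python
# Assign contiguous blocks of matrix rows to processors in proportion to
# their relative speeds (illustrative only).

def row_ranges(n_rows: int, speeds: list[float]) -> list[tuple[int, int]]:
    """Return (start, end) row ranges, one per processor, proportional to speed."""
    total = sum(speeds)
    ranges, start = [], 0
    for i, s in enumerate(speeds):
        # The last processor absorbs the rounding remainder.
        count = n_rows - start if i == len(speeds) - 1 else round(n_rows * s / total)
        ranges.append((start, start + count))
        start += count
    return ranges


# Homogeneous case: equal speeds give (near-)equal blocks.
print(row_ranges(10, [1.0, 1.0, 1.0]))   # [(0, 3), (3, 6), (6, 10)]
# Heterogeneous case: faster processors get proportionally more rows.
print(row_ranges(10, [3.0, 1.0, 1.0]))   # [(0, 6), (6, 8), (8, 10)]
```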

Runtime Incremental Parallel Scheduling (RIPS) on Distributed-Memory Computers (IEEE Transactions on Parallel and Distributed Systems)

Runtime Incremental Parallel Scheduling (RIPS) is an alternative to the commonly used dynamic scheduling. In this strategy, the system's scheduling activity alternates with the underlying computation work. RIPS uses advanced parallel scheduling techniques to produce low-overhead, high-quality load balancing and to adapt to irregular applications. This paper pr...

Full text
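The alternation of scheduling and computation that RIPS is built around can be pictured with a simplified stand-in like the following; the round-robin rebalancing policy and all names here are assumptions for illustration only, not the RIPS algorithm itself.

```python
# The run is split into phases: each compute phase processes part of every
# worker's queue, then a scheduling phase redistributes what remains.

def rebalance(queues: list[list[int]]) -> list[list[int]]:
    """Redistribute remaining tasks evenly across workers (placeholder policy)."""
    remaining = [t for q in queues for t in q]
    return [remaining[i::len(queues)] for i in range(len(queues))]


def run(tasks: list[int], n_workers: int, phase_size: int) -> None:
    queues = [tasks[i::n_workers] for i in range(n_workers)]
    while any(queues):
        # Compute phase: each worker processes up to `phase_size` of its tasks.
        for q in queues:
            del q[:phase_size]
        # Scheduling phase: pause computation and rebalance the leftover work.
        queues = rebalance(queues)
        print("tasks remaining:", sum(len(q) for q in queues))


run(list(range(100)), n_workers=4, phase_size=5)
```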

Static versus dynamic heterogeneous parallel schemes to solve the symmetric tridiagonal eigenvalue problem

Computation of the eigenvalues of a symmetric tridiagonal matrix is a problem of great relevance, and many linear algebra libraries provide subroutines for solving it. None of them, however, is designed to run on heterogeneous distributed-memory multicomputers, and it is this kind of platform that we focus on in this work. Two different load balancing schemes are presented and implemented. The experimental res...

Full text



Publication year: 2000